AI is no longer something that is “coming” to the charity sector. It is already here.
Across the UK, charities are quietly using AI every day to write bids, draft communications, analyse data and manage workload. Much of this use is informal, ungoverned and happening without shared understanding or leadership. In many cases, trustees do not realise it is happening at all. This gap between use and oversight creates both opportunity and real risk.
The Future Charity Report – Part 1 brings together the strongest evidence yet on Charity Sector AI adoption. Drawing on national surveys, real‑time benchmarking from thousands of charities and trusted external research, it provides a clear and grounded picture of where the sector stands today.
We are very grateful to the GSR Foundation, whose funding makes our work possible. We would also like to thank Microsoft and the people there with whom we work, and our corporate partners, who fund and support our work, including those who give us access to their in-depth technology expertise, often on a pro bono basis.

“AI is being used informally, without anyone really talking about it.”
Most UK charities are already using AI in some form, typically through individual staff or volunteers using tools like Microsoft Copilot. However, this use is rarely seen in the strategic context of the huge impact AI will have on society. Only a very small minority of charities are deploying AI with clear organisational oversight, agreed policies or trustee‑level ownership.
This creates a growing gap between what is happening in practice and what boards believe is happening, even though trustees remain legally responsible for AI‑related risks such as data protection, safeguarding and bias.
Learning: AI use has continued to accelerate but governance has not kept pace.
“Our organisation understands the importance of keeping up with AI, otherwise we risk being left behind.”
Around two thirds of charities describe themselves as exploring or experimenting with AI. Fewer than one in four have approved tools, policies or training in place, and fully embedded use remains uncommon.
Despite this early stage, pressure to engage is strong. Many charities feel they cannot afford to ignore AI, even if they are unsure how to proceed safely.
Learning: Momentum is building faster than confidence, and we risk losing public trust unless we act to significantly improve trustee-level governance oversight of AI.
“It feels like standing at the base of a mountain and not knowing which path to take.”
Just over half of charities strongly agree that AI could benefit their organisation. At the same time, concern is widespread. Data protection is the single biggest worry, followed by safeguarding, ethics and reputational risk.
A notable minority of charities still believe AI is not relevant to their work at all, including some grant makers, highlighting how uneven understanding remains across the sector.
Learning: Interest is high, but fear and uncertainty are holding many charities back.
“We haven’t formally discussed AI at a Trustee meeting yet.”
Charity Excellence system benchmarking data shows that practical risk controls — such as data protection measures, human review of AI‑generated funding bids, and safeguarding in AI‑enabled meetings — are improving.
However, all three board‑level AI governance controls remain rated Red across the sector: strategic assessment of AI’s impact, clear trustee or committee responsibility for AI, and organisation‑wide training and compliance.
Learning: Charities are managing immediate risks, but struggling to embed AI into governance and management.
Public attitudes towards Charity Sector AI are not hostile, but they are cautious. Around a third of people feel positive, a quarter feel negative, and the rest are unsure. Support is strongest where AI is used to protect funds, detect fraud or improve back‑office efficiency.
Trust drops sharply when AI is perceived to influence decisions about who receives support, replace human judgement, or use sensitive personal data without transparency.
Learning: The public is not against our use of AI, and its trust depends, at least in part, on the type of organisation using AI – and we are very trusted. However, the public is cautious and that trust is conditional. The current extensive charity use of AI without visible, effective governance, clear human control and honest communication creates a risk of our own making. That must change, because trust is critical for charities and will be even more so in an AI‑enabled world of slop, scams and fake news. We need intentional transparency: being explicit about why AI is used, where humans remain in charge, how data is protected, and where AI will not be used.
Both charities and the public express unease about AI‑generated imagery. Many charities have tried it and then stopped, citing ethical, reputational or authenticity concerns.
Research shows strong public support for authentic imagery, with lower acceptance where AI images appear realistic or emotionally manipulative, especially in sensitive contexts.
Learning: AI imagery needs careful, values‑led decision‑making and transparency.
“It’s not really resistance. It’s that we don’t know enough yet.”
Where charities hesitate, it is rarely due to blanket opposition to AI. The main barriers identified are:
Charities are clear about what they need next: plain‑English training, practical guidance, ready‑to‑use policies and funding to build capacity.
Learning: Confidence will come from support, not pressure.
Resources. Charity Excellence provides the following free AI support.
“People are already using it day to day, but it’s not really acknowledged and there are no guardrails.”
The evidence shows that Charity Sector AI is already part of day‑to‑day reality but the sector is at risk of moving forward without shared standards, confidence or trust.
Charities that succeed will be those that:
A detailed, evidence‑based analysis of Charity Sector AI use, attitudes, risks and support needs in 2026.
👉 Charity Excellence AI Report April 2026.
A registered charity ourselves, the CEF works for any non‑profit, not just charities.
Plus, 60+ policies, 8 online health checks, the Quality Mark and the huge resource base. Our AI Ready programme and free Charity Excellence Learning online AI training courses give non‑profits everything they need to make effective use of AI and stay safe.
Find Funding, Free Help & Resources - Everything Is Free.
Charity Excellence data.
We are also very happy to recognise the work of others that was used in creating the report.
The methodology used is detailed at the bottom of the downloadable full Charity AI Survey 2026 report.
The Charity Excellence AI Survey Agent was used to gather and analyse data, but under the direction and control of a human.
AI is already widely used by UK charities, most often through individual staff or volunteers using generative AI tools. However, this use is rarely strategic and is often informal, with limited organisational oversight or trustee involvement.
The main concern is the growing gap between day‑to‑day AI use and governance. Trustees remain legally responsible for risks such as data protection, safeguarding and bias, yet many boards are unaware of how AI is already being used within their organisations.
Most charities are still at an early stage. Around two thirds describe themselves as exploring or experimenting with AI, while fewer than one in four have approved tools, policies or training in place. Fully embedded use remains uncommon.
Most charities see AI as both an opportunity and a risk. Just over half strongly agree AI could benefit their organisation, but concern is widespread. Data protection is the single biggest worry, followed by safeguarding, ethics and reputational risk.
Key governance controls remain weak across the sector: strategic assessment of AI’s impact, clear trustee or committee responsibility for AI, and organisation‑wide training and compliance. This makes it harder to embed AI safely into management and decision‑making.
Public attitudes to charity sector AI are cautious rather than hostile. Around a third of people feel positive, a quarter feel negative, and the remainder are unsure. Trust depends heavily on how AI is used and for what purpose.
Trust is strongest when AI is used to protect funds, detect fraud or improve back‑office efficiency. It drops sharply when AI is seen to influence decisions about who receives support, replace human judgement, or use sensitive personal data without transparency.
Both charities and the public express unease about AI‑generated imagery. Many charities have tried it and then stopped due to ethical, reputational or authenticity concerns. Public acceptance is lower where images appear emotionally manipulative or are used in sensitive contexts.
Charities consistently ask for plain‑English training, practical guidance, ready‑to‑use policies and templates, and funding or time to build capacity. Confidence is more likely to come from support and clarity than pressure to move faster.